import os
from IPython.display import display, HTML
ModuleFolder = 'C:\\Users\\Gamaliel\\Documents\\G\\ADD\\IBM_DS\\Py-Databases-SQL-DS\\IBM\\NBs\\M03'
os.chdir(ModuleFolder)
#Install & load sqlite3
#!pip install sqlite3 ##Uncomment this code only if you are working in a local environment to install sqlite3
import sqlite3
# Connecting to sqlite
# connection object
conn = sqlite3.connect('INSTRUCTOR.db')
A Cursor object lets you execute SQLite statements and fetch rows from the result sets of queries. You create one with the cursor() method of the Connection object.
# cursor object
cursor_obj = conn.cursor()
Task 2: Create a table in the database
In this step we will create a table in the database with the following details:

Before creating a table, let's first check if the table already exists or not. To drop the table from a database, use the DROP query. A cursor is an object that helps execute the query and fetch the records from the database.
# Drop the table if already exists.
cursor_obj.execute("DROP TABLE IF EXISTS INSTRUCTOR")
<sqlite3.Cursor at 0x17f3c38d740>
Don't worry if you get this error:
If you see an exception/error similar to the following, indicating that INSTRUCTOR is an undefined name, that's okay. It just means that the INSTRUCTOR table does not exist in the database - which would be the case if you had not created it previously.
Exception: [IBM][CLI Driver][DB2/LINUXX8664] SQL0204N "ABC12345.INSTRUCTOR" is an undefined name. SQLSTATE=42704 SQLCODE=-204
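In SQLite specifically, you can also check whether a table exists before dropping it by querying the built-in sqlite_master catalog. A minimal sketch, shown against a throwaway in-memory database so it runs on its own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cursor_obj = conn.cursor()
cursor_obj.execute("CREATE TABLE IF NOT EXISTS INSTRUCTOR(ID INTEGER PRIMARY KEY NOT NULL)")

# sqlite_master lists every table, index, and view in the database
cursor_obj.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
    ("INSTRUCTOR",),
)
exists = cursor_obj.fetchone() is not None
print(exists)  # True
conn.close()
```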
# Creating table
table = """ create table IF NOT EXISTS INSTRUCTOR(ID INTEGER PRIMARY KEY NOT NULL, FNAME VARCHAR(20), LNAME VARCHAR(20), CITY VARCHAR(20), CCODE CHAR(2));"""
cursor_obj.execute(table)
print("Table is Ready")
Table is Ready
Task 3: Insert data into the table
In this step we will insert some rows of data into the table.
The INSTRUCTOR table we created in the previous step contains 3 rows of data:

We will start by inserting just the first row of data, i.e., the row for instructor Rav Ahuja.
cursor_obj.execute('''insert into INSTRUCTOR values (1, 'Rav', 'Ahuja', 'TORONTO', 'CA')''')
<sqlite3.Cursor at 0x17f3c38d740>
The output, something like sqlite3.Cursor at 0x27a1a491260, is simply the repr of the Cursor object that execute() returns; the hexadecimal address will differ from run to run.
Now use a single query to insert the remaining two rows of data
cursor_obj.execute('''insert into INSTRUCTOR values (2, 'Raul', 'Chong', 'Markham', 'CA'), (3, 'Hima', 'Vasudevan', 'Chicago', 'US')''')
<sqlite3.Cursor at 0x17f3c38d740>
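As an aside, the same rows can be inserted with a parameterized query via executemany(), which lets sqlite3 handle quoting instead of embedding literals in the SQL string. A sketch against an in-memory database so it is self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor_obj = conn.cursor()
cursor_obj.execute("""CREATE TABLE INSTRUCTOR(
    ID INTEGER PRIMARY KEY NOT NULL,
    FNAME VARCHAR(20), LNAME VARCHAR(20),
    CITY VARCHAR(20), CCODE CHAR(2))""")

rows = [
    (1, 'Rav', 'Ahuja', 'TORONTO', 'CA'),
    (2, 'Raul', 'Chong', 'Markham', 'CA'),
    (3, 'Hima', 'Vasudevan', 'Chicago', 'US'),
]
# '?' placeholders are bound safely by the driver
cursor_obj.executemany("INSERT INTO INSTRUCTOR VALUES (?, ?, ?, ?, ?)", rows)
n_inserted = cursor_obj.rowcount
print(n_inserted)  # 3
conn.close()
```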
Task 4: Query data in the table
In this step we will retrieve the data we inserted into the INSTRUCTOR table.
statement = '''SELECT * FROM INSTRUCTOR'''
cursor_obj.execute(statement)
print("All the data")
output_all = cursor_obj.fetchall()
for row_all in output_all:
    print(row_all)
# This is the list returned by output_all
print(output_all)
All the data
(1, 'Rav', 'Ahuja', 'TORONTO', 'CA')
(2, 'Raul', 'Chong', 'Markham', 'CA')
(3, 'Hima', 'Vasudevan', 'Chicago', 'US')
[(1, 'Rav', 'Ahuja', 'TORONTO', 'CA'), (2, 'Raul', 'Chong', 'Markham', 'CA'), (3, 'Hima', 'Vasudevan', 'Chicago', 'US')]
# Fetch a few rows from the table
statement = '''SELECT * FROM INSTRUCTOR'''
cursor_obj.execute(statement)
print("All the data")
# To fetch only a few rows, use fetchmany(n), where n is the number of rows you want
output_many = cursor_obj.fetchmany(2)
for row_many in output_many:
    print(row_many)
All the data
(1, 'Rav', 'Ahuja', 'TORONTO', 'CA')
(2, 'Raul', 'Chong', 'Markham', 'CA')
# Fetch only FNAME from the table
statement = '''SELECT FNAME FROM INSTRUCTOR'''
cursor_obj.execute(statement)
print("All the data")
output_column = cursor_obj.fetchall()
for fetch in output_column:
    print(fetch)
All the data
('Rav',)
('Raul',)
('Hima',)
Bonus: now write and execute an UPDATE statement that changes Rav's CITY to MOOSETOWN.
query_update='''update INSTRUCTOR set CITY='MOOSETOWN' where FNAME="Rav"'''
cursor_obj.execute(query_update)
<sqlite3.Cursor at 0x17f3c38d740>
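The same update can also be written with ? placeholders, binding both the new value and the filter value as parameters. A minimal sketch against an in-memory copy of the table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor_obj = conn.cursor()
cursor_obj.execute("CREATE TABLE INSTRUCTOR(ID INTEGER PRIMARY KEY, FNAME TEXT, CITY TEXT)")
cursor_obj.execute("INSERT INTO INSTRUCTOR VALUES (1, 'Rav', 'TORONTO')")

# bind values instead of interpolating them into the SQL string
cursor_obj.execute("UPDATE INSTRUCTOR SET CITY=? WHERE FNAME=?", ('MOOSETOWN', 'Rav'))
cursor_obj.execute("SELECT CITY FROM INSTRUCTOR WHERE FNAME=?", ('Rav',))
city = cursor_obj.fetchone()[0]
print(city)  # MOOSETOWN
conn.close()
```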
statement = '''SELECT * FROM INSTRUCTOR'''
cursor_obj.execute(statement)
print("All the data")
output1 = cursor_obj.fetchmany(2)
for row in output1:
    print(row)
All the data
(1, 'Rav', 'Ahuja', 'MOOSETOWN', 'CA')
(2, 'Raul', 'Chong', 'Markham', 'CA')
Task 5: Retrieve data into Pandas
In this step we will retrieve the contents of the INSTRUCTOR table into a pandas dataframe.
#!pip install pandas
import pandas as pd
#retrieve the query results into a pandas dataframe
df = pd.read_sql_query("select * from instructor;", conn)
#print the dataframe
df
| ID | FNAME | LNAME | CITY | CCODE | |
|---|---|---|---|---|---|
| 0 | 1 | Rav | Ahuja | MOOSETOWN | CA |
| 1 | 2 | Raul | Chong | Markham | CA |
| 2 | 3 | Hima | Vasudevan | Chicago | US |
#print just the LNAME for first row in the pandas data frame
df.LNAME[0]
'Ahuja'
Once the data is in a pandas dataframe, you can perform the typical pandas operations on it.
For example, you can use the shape attribute to see how many rows and columns are in the dataframe.
df.shape
(3, 5)
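Beyond shape, the usual pandas operations such as boolean filtering and grouping apply directly. A small sketch on a hand-built dataframe with the same columns, so it runs without the database:

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 2, 3],
    'FNAME': ['Rav', 'Raul', 'Hima'],
    'LNAME': ['Ahuja', 'Chong', 'Vasudevan'],
    'CITY': ['MOOSETOWN', 'Markham', 'Chicago'],
    'CCODE': ['CA', 'CA', 'US'],
})

canadians = df[df['CCODE'] == 'CA']   # boolean filtering
counts = df.groupby('CCODE').size()   # number of rows per country code
print(len(canadians))  # 2
print(counts['US'])    # 1
```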
Task 6: Close the Connection
We free all resources by closing the connection. It is always good practice to close connections so that unused connections do not take up resources.
# Close the connection
conn.close()
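To guarantee the connection is released even if an exception occurs midway, contextlib.closing can wrap it. Note that `with sqlite3.connect(...)` by itself only manages the transaction (commit/rollback), not closing. A minimal sketch:

```python
import sqlite3
from contextlib import closing

with closing(sqlite3.connect(":memory:")) as conn:
    cursor_obj = conn.cursor()
    cursor_obj.execute("SELECT 1")
    result = cursor_obj.fetchone()[0]
# conn.close() has been called here, whether or not the block raised
print(result)  # 1
```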
#%pwd
#%ls
try:
    %load_ext sql
except Exception as E:
    %reload_ext sql
import csv, sqlite3
Selected Socioeconomic Indicators in Chicago
The city of Chicago released a dataset of socioeconomic data to the Chicago City Portal. This dataset contains a selection of six socioeconomic indicators of public health significance and a “hardship index,” for each Chicago community area, for the years 2008 – 2012.
Scores on the hardship index can range from 1 to 100, with a higher index number representing a greater level of hardship.
A detailed description of the dataset can be found on the city of Chicago's website, but to summarize, the dataset has the following variables:
- Community Area Number (`ca`): used to uniquely identify each row of the dataset
- Community Area Name (`community_area_name`): the name of the region in the city of Chicago
- Percent of Housing Crowded (`percent_of_housing_crowded`): percent of occupied housing units with more than one person per room
- Percent Households Below Poverty (`percent_households_below_poverty`): percent of households living below the federal poverty line
- Percent Aged 16+ Unemployed (`percent_aged_16_unemployed`): percent of persons over the age of 16 years that are unemployed
- Percent Aged 25+ without High School Diploma (`percent_aged_25_without_high_school_diploma`): percent of persons over the age of 25 years without a high school education
- Percent Aged Under 18 or Over 64 (`percent_aged_under_18_or_over_64`): percent of the population under 18 or over 64 years of age (i.e., dependents)
- Per Capita Income (`per_capita_income_`): community area per capita income, estimated as the sum of tract-level aggregate incomes divided by the total population
- Hardship Index (`hardship_index`): score that incorporates each of the six selected socioeconomic indicators
In this Lab, we'll take a look at the variables in the socioeconomic indicators dataset and do some basic analysis with Python.
Connect to the database
Let us first load the SQL extension and establish a connection with the database.
The syntax for connecting to the SQL magic using SQLite is:
%sql sqlite:///DatabaseName
where DatabaseName is your .db file.
#!pip install ipython-sql
#!pip install seaborn
import seaborn as sns
%load_ext sql
The sql extension is already loaded. To reload it, use: %reload_ext sql
import csv, sqlite3
con = sqlite3.connect("socioeconomic.db")
cur = con.cursor()
#!pip install pandas
%sql sqlite:///socioeconomic.db
Store the dataset in a Table
In many cases the dataset to be analyzed is available as a .CSV (comma-separated values) file, perhaps on the internet. To analyze the data using SQL, it first needs to be stored in the database.
We will first read the CSV file from the given URL into a pandas dataframe.
Next we will use the df.to_sql() function to load the CSV data into a table in SQLite.
import pandas
df = pandas.read_csv('https://data.cityofchicago.org/resource/jcxq-k9xf.csv')
df.to_sql("chicago_socioeconomic_data", con, if_exists='replace', index=False,method="multi")
78
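The 78 printed above is to_sql's return value (in recent pandas versions, the number of rows written). You can sanity-check the round trip without the live URL by writing a small hand-made frame and reading it back; the two-row dataframe below is made-up illustration data:

```python
import sqlite3
import pandas as pd

con = sqlite3.connect(":memory:")
df = pd.DataFrame({'ca': [1.0, 2.0], 'hardship_index': [39.0, 46.0]})

df.to_sql("chicago_socioeconomic_data", con, if_exists='replace', index=False)
back = pd.read_sql_query("SELECT * FROM chicago_socioeconomic_data", con)
print(len(back))           # 2
print(list(back.columns))  # ['ca', 'hardship_index']
con.close()
```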
# Install the 'ipython-sql' and 'prettytable' libraries using pip
#!pip install ipython-sql prettytable
# Import the 'prettytable' library, which is used to display data in a formatted table
import prettytable
# Set the default display format for prettytable to 'DEFAULT' (i.e., a simple table format)
prettytable.DEFAULT = 'DEFAULT'
You can verify that the table creation was successful by making a basic query like:
%sql SELECT * FROM chicago_socioeconomic_data limit 5;
* sqlite:///socioeconomic.db Done.
| ca | community_area_name | percent_of_housing_crowded | percent_households_below_poverty | percent_aged_16_unemployed | percent_aged_25_without_high_school_diploma | percent_aged_under_18_or_over_64 | per_capita_income_ | hardship_index |
|---|---|---|---|---|---|---|---|---|
| 1.0 | Rogers Park | 7.7 | 23.6 | 8.7 | 18.2 | 27.5 | 23939 | 39.0 |
| 2.0 | West Ridge | 7.8 | 17.2 | 8.8 | 20.8 | 38.5 | 23040 | 46.0 |
| 3.0 | Uptown | 3.8 | 24.0 | 8.9 | 11.8 | 22.2 | 35787 | 20.0 |
| 4.0 | Lincoln Square | 3.4 | 10.9 | 8.2 | 13.4 | 25.5 | 37524 | 17.0 |
| 5.0 | North Center | 0.3 | 7.5 | 5.2 | 4.5 | 26.2 | 57123 | 6.0 |
Problems
Problem 1
How many rows are in the dataset?
%sql SELECT COUNT(*) FROM chicago_socioeconomic_data;
* sqlite:///socioeconomic.db Done.
| COUNT(*) |
|---|
| 78 |
Click here for the solution
```python
%sql SELECT COUNT(*) FROM chicago_socioeconomic_data;
```
Correct answer: 78
Problem 2
How many community areas in Chicago have a hardship index greater than 50.0?
%sql SELECT COUNT(*) as count FROM chicago_socioeconomic_data WHERE hardship_index > 50
* sqlite:///socioeconomic.db Done.
| count |
|---|
| 38 |
Click here for the solution
```python
%sql SELECT COUNT(*) FROM chicago_socioeconomic_data WHERE hardship_index > 50.0;
```
Correct answer: 38
Problem 3
What is the maximum value of hardship index in this dataset?
%sql SELECT MAX(hardship_index) FROM chicago_socioeconomic_data;
* sqlite:///socioeconomic.db Done.
| MAX(hardship_index) |
|---|
| 98.0 |
Click here for the solution
```python
%sql SELECT MAX(hardship_index) FROM chicago_socioeconomic_data;
```
Correct answer: 98.0
Problem 4
Which community area has the highest hardship index?
%sql SELECT * FROM chicago_socioeconomic_data WHERE hardship_index= (SELECT MAX(hardship_index) FROM chicago_socioeconomic_data);
* sqlite:///socioeconomic.db Done.
| ca | community_area_name | percent_of_housing_crowded | percent_households_below_poverty | percent_aged_16_unemployed | percent_aged_25_without_high_school_diploma | percent_aged_under_18_or_over_64 | per_capita_income_ | hardship_index |
|---|---|---|---|---|---|---|---|---|
| 54.0 | Riverdale | 5.8 | 56.5 | 34.6 | 27.5 | 51.5 | 8201 | 98.0 |
Click here for the solution
```python
# We can use the result of the last query as an input to this query:
%sql SELECT community_area_name FROM chicago_socioeconomic_data WHERE hardship_index = 98.0;
# or another option:
%sql SELECT community_area_name FROM chicago_socioeconomic_data ORDER BY hardship_index DESC LIMIT 1;
# or you can use a sub-query to determine the max hardship index:
%sql SELECT community_area_name FROM chicago_socioeconomic_data WHERE hardship_index = (SELECT MAX(hardship_index) FROM chicago_socioeconomic_data);
```
Correct answer: 'Riverdale'
Problem 5
Which Chicago community areas have per-capita incomes greater than $60,000?
%sql SELECT * FROM chicago_socioeconomic_data WHERE per_capita_income_ > 60000;
* sqlite:///socioeconomic.db Done.
| ca | community_area_name | percent_of_housing_crowded | percent_households_below_poverty | percent_aged_16_unemployed | percent_aged_25_without_high_school_diploma | percent_aged_under_18_or_over_64 | per_capita_income_ | hardship_index |
|---|---|---|---|---|---|---|---|---|
| 6.0 | Lake View | 1.1 | 11.4 | 4.7 | 2.6 | 17.0 | 60058 | 5.0 |
| 7.0 | Lincoln Park | 0.8 | 12.3 | 5.1 | 3.6 | 21.5 | 71551 | 2.0 |
| 8.0 | Near North Side | 1.9 | 12.9 | 7.0 | 2.5 | 22.6 | 88669 | 1.0 |
| 32.0 | Loop | 1.5 | 14.7 | 5.7 | 3.1 | 13.5 | 65526 | 3.0 |
Click here for the solution
```python
%sql SELECT community_area_name FROM chicago_socioeconomic_data WHERE per_capita_income_ > 60000;
```
Correct answer: Lake View, Lincoln Park, Near North Side, Loop
Problem 6
Create a scatter plot using the variables per_capita_income_ and hardship_index. Explain the correlation between the two variables.
import pandas as pd
import numpy as np
!pip install scipy
import scipy.stats
CSD=%sql SELECT * FROM chicago_socioeconomic_data;
df=pd.DataFrame(CSD)
j=sns.jointplot(x='per_capita_income_',y='hardship_index',data=df)
x=np.array(df['per_capita_income_'])
y=np.array(df['hardship_index'])
X=x[np.isnan(x)==False]
Y=y[np.isnan(y)==False]
minl=min([len(X),len(Y)])
X=X[0:minl]
Y=Y[0:minl]
#print(np.isnan(X))
#print(np.isnan(Y))
r,p=scipy.stats.pearsonr(X,Y)
#j.annotate(stats.pearsonr)
# if you choose to write your own legend, then you should adjust the properties then
phantom, = j.ax_joint.plot([], [], linestyle="", alpha=0)
j.ax_joint.legend([phantom],['r={:f}, p={:f}'.format(r,p)])
#j.ax_joint.legend([phantom],[f'r={round(r,3)}, p={float(p)}'])
Requirement already satisfied: scipy in c:\users\gamaliel\anaconda3\lib\site-packages (1.13.1)
Requirement already satisfied: numpy<2.3,>=1.22.4 in c:\users\gamaliel\anaconda3\lib\site-packages (from scipy) (1.26.4)
 * sqlite:///socioeconomic.db Done.
<matplotlib.legend.Legend at 0x17f42347890>
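The r reported in the legend can also be computed directly from the definition of the Pearson correlation, without scipy. A small sketch on made-up, perfectly linear data (so r comes out exactly 1.0):

```python
import numpy as np

def pearson_r(x, y):
    # r = sum((x - xbar)(y - ybar)) / sqrt(sum((x - xbar)^2) * sum((y - ybar)^2))
    dx = x - x.mean()
    dy = y - y.mean()
    return (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # y is exactly 2*x
r_val = pearson_r(x, y)
print(r_val)  # 1.0
```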
Click here for the solution
```python
# if the import command gives ModuleNotFoundError: No module named 'seaborn'
# then uncomment the following line, i.e. delete the # to install the seaborn package
# !pip install seaborn
!pip install matplotlib seaborn

income_vs_hardship = %sql SELECT per_capita_income_, hardship_index FROM chicago_socioeconomic_data;
plot = sns.jointplot(x='per_capita_income_', y='hardship_index', data=income_vs_hardship.DataFrame())
```
Correct answer: You can see that as per capita income rises, the hardship index decreases. The points on the scatter plot lie fairly close to a straight line with negative slope, so the two variables are negatively correlated.
Conclusion
Now that you know how to do basic exploratory data analysis using SQL and Python visualization tools, you can further explore this dataset to see how the variable per_capita_income_ is related to percent_households_below_poverty and percent_aged_16_unemployed. Try to create interesting visualizations!
Remember that the equation for the Pearson correlation is
\begin{equation*} r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}} \end{equation*}
Saving
import os
FromFld = 'C:\\Users\\Gamaliel\\Documents\\G\\ADD\\IBM_DS\\Py-Databases-SQL-DS\\Mine\\'
os.chdir(FromFld)  # chdir needs the target path; it was missing here
try:
    !jupyter nbconvert SQL-Py-Notes.ipynb --to html --template pj
except Exception as e:
    print('HTML not stored')
import shutil
import os
#file2 = Tofld + 'P4DSNotes.html'
# shutil.copy copies a single file from A -> B
#shutil.copy(os.path.join(FromFld, fileh), Tofld)
# shutil.copytree copies all the content of a folder from A -> B
#shutil.copytree(FromFld, Tofld)
import shutil
FromFld = 'C:\\Users\\Gamaliel\\Documents\\G\\ADD\\IBM_DS\\Py-Databases-SQL-DS\\Mine\\'
Tofld = 'C:\\Users\\Gamaliel\\Documents\\G\\ADD\\IBM_DS\\IBM_DS_Jupyter_Tasks\\Python4DataScience\\'
fileh = 'SQL-Py-Notes.html'
filep = 'P4DSNotes.pdf'
try:
    if os.path.isfile(os.path.join(Tofld, fileh)):
        os.remove(os.path.join(Tofld, fileh))
        print(fileh, 'deleted in', Tofld)
        shutil.move(os.path.join(FromFld, fileh), os.path.join(Tofld, fileh))
        print(fileh, 'replaced in', Tofld)
    else:
        shutil.move(os.path.join(FromFld, fileh), os.path.join(Tofld, fileh))
        print(fileh, 'written in', Tofld)
except Exception as e:
    print('HTML not moved')
HTML not moved